chore: upgrade Flink to 2.2.0 and CP to 8.2.0 in Docker deployment #136
Dave Troiano (davetroiano) wants to merge 1 commit into master
Conversation
🎉 All Contributor License Agreements have been signed. Ready to merge.
Pull request overview
Upgrades the Docker-based Flink SQL tutorial deployment to Flink 2.2.0 and Confluent Platform 8.2.0, aligning container images and Flink configuration for the newer runtime.
Changes:
- Bump Flink base images / build version to `2.2.0-scala_2.12-java17`.
- Replace legacy `flink-conf.yaml` with a new hierarchical `config.yaml` under `docker/flink/conf/`.
- Update `docker-compose-flinksql.yml` to CP 8.2.0 images and adjust exposed services/ports.
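For orientation, the legacy flat keys map onto nested YAML in the Flink 2.x-style file. A minimal sketch, using only the two settings visible elsewhere in this deployment; the actual `config.yaml` in the PR may contain more:

```yaml
# Legacy docker/flink/conf/flink-conf.yaml used flat dotted keys:
#   jobmanager.rpc.address: flink-jobmanager
#   taskmanager.numberOfTaskSlots: 10

# Flink 2.x docker/flink/conf/config.yaml nests the same settings:
jobmanager:
  rpc:
    address: flink-jobmanager
taskmanager:
  numberOfTaskSlots: 10
```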
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| docker/flink/conf/flink-conf.yaml | Removes the legacy Flink configuration file from the Docker deployment. |
| docker/flink/conf/config.yaml | Adds the Flink 2.x-style hierarchical configuration used by the Docker images. |
| docker/flink/Makefile | Updates the Flink build/push tag to 2.2.0-scala_2.12-java17. |
| docker/flink/Dockerfile.flink-sql-client-kafka | Updates the SQL client image to Flink 2.2.0 and refreshes downloaded connector JARs. |
| docker/flink/Dockerfile.flink-kafka | Updates the Flink runtime image to Flink 2.2.0 and refreshes downloaded connector JARs. |
| docker/docker-compose-flinksql.yml | Upgrades Kafka/Schema Registry images to 8.2.0 and updates the Flink service images to 2.2.0. |
Quoted config (`docker/flink/conf/config.yaml`):

```yaml
parallelism:
  default: 1
rest:
  address: flink-jobmanager
```

The REST endpoint is exposed on the host via 8081:8081 in docker-compose, but the Flink config here does not set a REST bind address. If Flink defaults to binding REST to localhost inside the container, the Web UI/REST API will not be reachable from outside the container. Consider explicitly setting the REST bind address (and bind port if needed) in this config (e.g., bind to 0.0.0.0).

Suggested change:

```yaml
rest:
  address: flink-jobmanager
  bind-address: 0.0.0.0
```
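With the bind setting applied, the `rest` section of the hierarchical file might look as follows; the `bind-port` line is an assumption, needed only if the container-side port should be stated explicitly rather than left at its default:

```yaml
rest:
  # Advertised address that clients use to reach the JobManager REST endpoint.
  address: flink-jobmanager
  # Bind on all interfaces so the host-mapped 8081:8081 port is reachable.
  bind-address: 0.0.0.0
  # Optional (an assumption): make the container-side port explicit.
  bind-port: 8081
```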
Quoted Dockerfile lines (`docker/flink/Dockerfile.flink-sql-client-kafka`):

```dockerfile
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
```

These images download connector JARs from Maven Central during the Docker build without any integrity verification (checksum/signature). This is a supply-chain risk and can also lead to non-reproducible builds if artifacts are ever replaced. Consider pinning and verifying checksums (or vendoring the artifacts / using a build that resolves deps via Gradle/Maven and copies them into the image).

Suggested change (note: Maven Central `.sha512` files hold only the bare digest, so the filename must be appended before `sha512sum -c` can verify it):

```dockerfile
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar.sha512; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar.sha512; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar.sha512; \
    cd /opt/flink/lib && \
    for jar in flink-sql-connector-kafka-4.0.1-2.0.jar flink-json-2.2.0.jar flink-sql-avro-confluent-registry-2.2.0.jar; do \
      echo "$(cat "$jar.sha512")  $jar" | sha512sum -c - || exit 1; \
    done && \
    rm -f *.sha512
```
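The check in the suggestion can be exercised outside Docker. The sketch below fakes the downloaded artifact and its bare-digest `.sha512` file (the file name reuses the flink-json 2.2.0 artifact from the Dockerfile, but the bytes are synthetic, not fetched from Maven Central):

```shell
set -eu

# Stand-in for a JAR that the Dockerfile fetches with wget from Maven Central.
printf 'fake jar bytes' > flink-json-2.2.0.jar

# Maven Central checksum files contain only the bare hex digest, no filename.
sha512sum flink-json-2.2.0.jar | awk '{print $1}' > flink-json-2.2.0.jar.sha512

# sha512sum -c expects "DIGEST  FILENAME" lines, so append the name before checking.
echo "$(cat flink-json-2.2.0.jar.sha512)  flink-json-2.2.0.jar" | sha512sum -c -

# Drop the checksum file once verification succeeds, as the suggestion does.
rm -f flink-json-2.2.0.jar.sha512
```

If the JAR were tampered with between download and check, `sha512sum -c` would exit non-zero and, under `set -e`, fail the build at that step.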
Quoted Dockerfile lines (`docker/flink/Dockerfile.flink-kafka`):

```dockerfile
RUN wget -P /opt/flink/lib https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
```

These images download connector JARs from Maven Central during the Docker build without any integrity verification (checksum/signature). This is a supply-chain risk and can also lead to non-reproducible builds if artifacts are ever replaced. Consider pinning and verifying checksums (or vendoring the artifacts / using a build that resolves deps via Gradle/Maven and copies them into the image).

Suggested change (again appending the filename, since Maven Central `.sha512` files contain only the bare digest):

```dockerfile
RUN set -eux; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar.sha512; \
    (cd /opt/flink/lib && echo "$(cat flink-sql-connector-kafka-4.0.1-2.0.jar.sha512)  flink-sql-connector-kafka-4.0.1-2.0.jar" | sha512sum -c -); \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar.sha512; \
    (cd /opt/flink/lib && echo "$(cat flink-json-2.2.0.jar.sha512)  flink-json-2.2.0.jar" | sha512sum -c -); \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar.sha512; \
    (cd /opt/flink/lib && echo "$(cat flink-sql-avro-confluent-registry-2.2.0.jar.sha512)  flink-sql-avro-confluent-registry-2.2.0.jar" | sha512sum -c -); \
    rm -f /opt/flink/lib/*.sha512
```
Quoted compose fragment (`docker/docker-compose-flinksql.yml`):

```yaml
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: flink-jobmanager
        taskmanager.numberOfTaskSlots: 10
```

FLINK_PROPERTIES here duplicates settings that appear to be baked into the image via /opt/flink/conf/config.yaml (e.g., jobmanager.rpc.address and taskmanager.numberOfTaskSlots). Keeping the same config in both places increases the chance they drift; consider relying on a single source of truth (either the config file or container-specific FLINK_PROPERTIES) or adding a comment explaining why overrides are needed.
The same comment applies to a second, identical FLINK_PROPERTIES block elsewhere in `docker/docker-compose-flinksql.yml`:

```yaml
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: flink-jobmanager
        taskmanager.numberOfTaskSlots: 10
```